    Top properties: prospects at CMS

    With about 10 million top-quark pair events per year at low luminosity, the Large Hadron Collider will be a top factory. Precision measurements in the top-quark sector will allow detailed studies of the electroweak (and flavor) symmetry-breaking mechanism and will help constrain the Standard Model. A review of the measurements of top-quark properties is given, together with indications of the potential of CMS.

    CMS workload management

    From September 2007 the LHC accelerator will start its activity and CMS, one of the four experiments, will begin to take data. The CMS computing model is based on the Grid paradigm, in which data are deployed to and accessed at a number of geographically distributed computing centers. In addition to real data events, a large number of simulated ones will be produced in a similar, distributed manner. Both real and simulated data will be analyzed by physicists, at an expected rate of 100,000 jobs per day submitted to the Grid infrastructure. To reach these goals, CMS is developing two workload-management tools (plus a set of services): ProdAgent and CRAB. ProdAgent deals with the Monte Carlo production system: it creates and configures jobs, interacts with the Framework, merges outputs to a reasonable file size, and publishes the simulated data back into the CMS data bookkeeping and data location services. CRAB (CMS Remote Analysis Builder) is the tool deployed by CMS to access these remote data. CRAB allows a generic user, without specific knowledge of the Grid infrastructure, to access data and perform an analysis as simply as in a local environment. CRAB takes care of interacting with all Data Management services, from data discovery and location to output-file management. An overview of the current implementation of the CMS workload-management components is presented in this work.
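    As a hypothetical illustration of the bookkeeping these tools automate, the Python sketch below splits a dataset into Grid-sized jobs and groups per-job outputs into merge sets of a reasonable file size; all names and numbers are invented, and this is not the actual CMS code.

        # Hypothetical sketch of the job-splitting/merging bookkeeping that
        # tools like ProdAgent and CRAB automate; not actual CMS code.

        def split_into_jobs(total_events, events_per_job):
            """Partition a dataset into (first_event, n_events) job specifications."""
            jobs, first = [], 0
            while first < total_events:
                n = min(events_per_job, total_events - first)
                jobs.append({"first_event": first, "n_events": n})
                first += n
            return jobs

        def merge_outputs(output_files, target_size):
            """Group per-job output files into merge sets of roughly target_size bytes."""
            groups, current, size = [], [], 0
            for name, nbytes in output_files:      # (filename, size) pairs
                current.append(name)
                size += nbytes
                if size >= target_size:
                    groups.append(current)
                    current, size = [], 0
            if current:
                groups.append(current)
            return groups

        # e.g. 1,000,000 simulated events in jobs of 25,000 events each:
        jobs = split_into_jobs(1_000_000, 25_000)  # -> 40 job specifications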

    Monte Carlo simulations of soft proton flares: testing the physics with XMM-Newton

    Low-energy protons (<100-300 keV) in the Van Allen belt and the outer regions can enter the field of view of X-ray focusing telescopes, interact with the Wolter-I optics, and reach the focal plane. The use of special filters protects the XMM-Newton focal plane below an altitude of 70,000 km, but above this limit the effect of soft protons is still present in the form of sudden flares in the count rate of the EPIC instruments, causing the loss of large amounts of observing time. We try to characterize the input proton population and the interaction physics by simulating, with the BoGEMMS framework, the proton interaction with a simplified model of the X-ray mirror module and the focal plane, and comparing the result with a real observation. The analysis of ten orbits of observations of the EPIC/pn instrument shows that the detection of flares in regions far outside the radiation belt is largely influenced by the varying orientation of the Earth's magnetosphere with respect to XMM-Newton's orbit, confirming the solar origin of the soft proton population. The Equator-S proton spectrum at 70,000 km altitude is used for the proton population entering the optics, with a combination of multiple and Firsov scattering as the interaction physics. If the thick filter is used, the soft protons in the 30-70 keV energy range are the main contributors to the simulated spectrum below 10 keV. We are able to reproduce the proton vignetting observed in real data sets, with a 50% decrease from the inner to the outer region, but a maximum flux of 0.01 counts cm^-2 s^-1 keV^-1 is obtained below 10 keV, about 5 times lower than the EPIC/MOS detection and 100 times lower than the EPIC/pn one. Given the high variability of the flare intensity, we conclude that an average spectrum, based on the analysis of a full season of soft proton events, is required to compare Monte Carlo simulations with real events.
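    As a toy illustration of the Monte Carlo strategy (not the BoGEMMS/Geant4 physics), the sketch below draws proton energies from an assumed power-law spectrum and applies an invented grazing-angle transmission efficiency to estimate the fraction of protons reaching the focal plane; every number is a placeholder.

        # Toy Monte Carlo sketch of the simulation strategy; the spectrum,
        # efficiency model, and all numbers are placeholders.
        import numpy as np

        rng = np.random.default_rng(1)

        def sample_power_law(n, e_min=30.0, e_max=300.0, gamma=1.5):
            """Draw proton energies (keV) from dN/dE ~ E^-gamma by inverse-CDF sampling."""
            a = 1.0 - gamma
            u = rng.uniform(size=n)
            return (e_min**a + u * (e_max**a - e_min**a)) ** (1.0 / a)

        def reaches_focal_plane(e_kev, angle_deg):
            """Invented transmission probability: forward scattering favors small grazing angles."""
            p = 0.1 * np.exp(-angle_deg / 0.5) * (50.0 / e_kev)
            return rng.uniform(size=e_kev.size) < np.clip(p, 0.0, 1.0)

        energies = sample_power_law(1_000_000)
        angles = rng.uniform(0.2, 1.0, size=energies.size)  # grazing angles on the optics (deg)
        mask = reaches_focal_plane(energies, angles)
        print(f"transmitted fraction: {mask.mean():.4f}")
        print(f"median transmitted energy: {np.median(energies[mask]):.1f} keV")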

    Machine Learning as a Service for High Energy Physics on heterogeneous computing resources

    Machine Learning (ML) techniques are ubiquitous in the High-Energy Physics (HEP) domain and will also play a significant role in the upcoming High-Luminosity LHC (HL-LHC) upgrade foreseen at CERN: a huge amount of data will be produced by the LHC and collected by the experiments, facing challenges at the exascale. Although ML models are successfully applied in many use cases (online and offline reconstruction, particle identification, detector simulation, Monte Carlo generation, to name a few), there is a constant search for scalable, performant, and production-quality operation of ML-enabled workflows. In addition, the scenario is complicated by the gap between HEP physicists and ML experts, caused by the specificity of parts of typical HEP workflows and solutions, and by the difficulty of formulating HEP problems in a way that matches the skills of the Computer Science (CS) and ML community and hence its potential ability to step in and help. Among other factors, one technical obstacle lies in the difference between the data formats used by ML practitioners and physicists: the former mostly use flat data representations, while the latter typically store data in tree-based objects via the ROOT data format. Another obstacle to the further development of ML techniques in HEP is the difficulty of securing adequate computing resources for the training and inference of ML models in a way that is scalable and transparent with respect to CPU vs GPU vs TPU vs other resources, as well as local vs cloud resources. This creates a technical barrier that prevents a relatively large portion of HEP physicists from fully accessing the potential of ML-enabled systems for scientific research. To close this gap, a Machine Learning as a Service for HEP (MLaaS4HEP) solution is presented as a product of R&D activities within the CMS experiment. It offers a service that is capable of directly reading ROOT-based data, using the ML solution provided by the user, and ultimately serving predictions by pre-trained ML models “as a service”, accessible via the HTTP protocol. This solution can be used by physicists or by experts outside the HEP domain, and it provides access to local or remote data storage without requiring any modification of, or integration with, the experiment-specific framework. Moreover, MLaaS4HEP is built with a modular design allowing independent resource allocation, which opens up the possibility of training ML models on PB-size datasets remotely accessible from the WLCG sites without physically downloading the data into local storage. To prove the feasibility and utility of the MLaaS4HEP service with large datasets, and thus be ready for the near future when an increase in the data produced is expected, an exploration of different hardware resources is required. In particular, this work aims to provide the MLaaS4HEP service with transparent access to heterogeneous resources, which opens up the use of more powerful resources without requiring any effort from the user during the access and use phases.
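    As a concrete illustration of the data-format gap, the minimal Python sketch below reads selected branches of a ROOT TTree into the flat numpy arrays that most ML frameworks expect, using the open-source uproot library; the file path, tree name, and branch names are placeholders, and this is not the MLaaS4HEP implementation itself.

        # Minimal sketch of flattening ROOT tree data for ML, using uproot;
        # file path, tree name, and branch names are placeholders.
        import numpy as np
        import uproot  # pip install uproot

        def load_flat(path, tree_name, branches):
            """Read selected flat (non-jagged) branches into an (events, features) matrix."""
            with uproot.open(path) as f:
                arrays = f[tree_name].arrays(branches, library="np")
            return np.stack([arrays[b] for b in branches], axis=1)

        # Remote files can be streamed the same way over the XRootD protocol,
        # e.g. uproot.open("root://host//path/file.root"), without a local copy.
        X = load_flat("events.root", "Events", ["pt", "eta", "phi"])  # placeholder names
        print(X.shape)  # -> (n_events, 3), ready for any ML framework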

    Prototype of a cloud native solution of Machine Learning as Service for HEP

    To favor the use of Machine Learning (ML) techniques in High-Energy Physics (HEP) analyses, it would be useful to have a service that allows the entire ML pipeline to be performed (in terms of reading the data, training an ML model, and serving predictions) directly using ROOT files of arbitrary size from local or remote distributed data sources. The MLaaS4HEP framework aims to provide such a solution. It was successfully validated with a CMS physics use case, which gave important feedback about the needs of analysts. For instance, we introduced the possibility for the user to provide pre-processing operations, such as defining new branches and applying cuts. To provide a real service for the user and to integrate it into the INFN Cloud, we started working on the cloudification of MLaaS4HEP. This would allow cloud resources to be used and the service to work in a distributed environment. In this work, we provide updates on this topic and, in particular, discuss our first working prototype of the service. It includes an OAuth2 proxy server as the authentication/authorization layer, an MLaaS4HEP server, an XRootD proxy server enabling access to remote ROOT data, and the TensorFlow as a Service (TFaaS) service in charge of the inference phase. With this architecture, the user is able to submit ML pipelines, after being authenticated and authorized, using local or remote ROOT files through simple HTTP calls.
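    For illustration, a hypothetical submission to such a service could look like the Python sketch below; the endpoint URL, payload schema, and token handling are assumptions made for the sake of the example, not the actual prototype's API.

        # Hypothetical sketch of an HTTP pipeline submission; endpoint,
        # payload schema, and token handling are assumptions, not the real API.
        import requests

        SERVICE = "https://mlaas.example.org"   # OAuth2 proxy in front of the MLaaS server
        TOKEN = "..."                           # bearer token from the identity provider (elided)

        payload = {
            "files": ["root://xrootd.example.org//store/user/data.root"],  # remote ROOT files
            "model": "my_model.py",                                        # user-provided ML model
            "params": {"epochs": 5, "batch_size": 256},
        }
        resp = requests.post(f"{SERVICE}/submit", json=payload,
                             headers={"Authorization": f"Bearer {TOKEN}"}, timeout=60)
        resp.raise_for_status()
        print(resp.json())  # e.g. an identifier with which to poll the pipeline status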

    CMB Observations: improvements of the performance of correlation radiometers by signal modulation and synchronous detection

    Observation of the fine structures (anisotropies, polarization, spectral distortions) of the Cosmic Microwave Background (CMB) is hampered by instabilities, 1/f noise, and asymmetries of the radiometers used to carry out the measurements. Adding modulation and synchronous detection increases the overall stability and noise rejection of the radiometers used for CMB studies. In this paper we discuss the advantages this technique offers when trying to detect CMB polarization. The behaviour of a two-channel correlation receiver to which phase modulation and synchronous detection have been added is examined, and practical formulae for evaluating the improvements are presented.
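    The principle can be illustrated with a minimal numpy sketch: a small constant signal chopped at a reference frequency survives synchronous detection, while slow drifts and offsets that are not coherent with the modulation average away. All numbers are illustrative, not instrument parameters.

        # Minimal sketch of modulation + synchronous (lock-in) detection;
        # all numbers are illustrative, not instrument parameters.
        import numpy as np

        fs, f_mod, T = 10_000.0, 137.0, 5.0            # sampling rate (Hz), modulation freq (Hz), duration (s)
        t = np.arange(0.0, T, 1.0 / fs)
        ref = np.sign(np.sin(2 * np.pi * f_mod * t))   # square-wave phase modulation (+1/-1)

        sky = 1e-3                                     # small constant signal to recover
        drift = 0.05 * t                               # slow 1/f-like gain/offset drift
        noise = 0.01 * np.random.default_rng(0).standard_normal(t.size)
        data = sky * ref + drift + noise               # what the radiometer records

        demod = data * ref                             # synchronous detection
        estimate = demod.mean()                        # crude low-pass filter: average over T
        print(f"recovered: {estimate:.2e} (true 1.0e-03)")  # the drift averages away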

    Coating samples for the BEaTriX mirrors: surface roughness analysis

    This brief document reports the X-ray reflectivity and scattering characterizations performed on platinum coating samples deposited at DTU on superpolished (rms < 2 Å) fused silica substrates. The samples represent preliminary tests for the deposition of the reflective coating on the parabolic mirror (in HOQ310 fused quartz) of the BEaTriX X-ray facility, under construction at INAF-OAB. The X-ray characterizations performed at INAF-OAB encompass XRR tests at 8.045 keV and low-resolution XRS measurements at the same energy. While the former returns a single roughness rms value within a not precisely identified spatial frequency range, the latter provides a reliable method for an independent measurement of the surface PSD.
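    For illustration, the minimal sketch below shows how a surface PSD can be estimated from a 1D roughness profile via a periodogram, and how the rms is recovered from its integral; the profile is synthetic, and the windowing/averaging details of the actual analysis are not reproduced.

        # Minimal sketch of a 1D surface PSD estimate; the profile is synthetic
        # and the normalization is the bare periodogram, without windowing.
        import numpy as np

        rng = np.random.default_rng(2)
        N, dx = 4096, 1e-6                  # number of samples and sampling step (m), illustrative
        z = 2e-10 * rng.standard_normal(N)  # synthetic white-noise profile, ~2 angstrom rms

        Z = np.fft.rfft(z)
        f = np.fft.rfftfreq(N, d=dx)               # spatial frequencies (1/m)
        psd = 2.0 * np.abs(Z) ** 2 * dx / N        # one-sided periodogram PSD (m^3)
        rms = np.sqrt(np.trapz(psd[1:], f[1:]))    # Parseval: integral of the PSD ~ rms^2
        print(f"rms from PSD ~ {rms * 1e10:.2f} angstrom")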

    SWORDS - SoftWare fOR Diffraction Simulation of silicon pore optics: the user manual

    This brief document is a quick guide to the use of SWORDS, an SPO diffraction simulation tool developed in the SImPOSIUM project for the ATHENA X-ray telescope in the IDL® environment. The tool allows the user to simulate the PSF of an SPO XOU or MM using physical optics, therefore including the effects of diffraction and figure errors (see the sketch after the list below). The program, released to ESA in the ISR of the project, is to some extent the 2D equivalent of the WISE simulation code, which works in 1D for grazing-incidence mirrors. The code is still being developed, but is already fully functional. A previous manual for the simulation tool (release 3.0) was issued in 2020; the current manual refers to version 3.7.2 of SWORDS. New functionalities of the program include:
    - correction of numerical bugs, label reorganization, and window restyling;
    - uniform curvature profile as an alternative to Wolter-I;
    - Gaussian line profile for non-monochromatic sources;
    - Gaussian-shaped sources;
    - random (quasi-Gaussian) phases in sinusoidal errors;
    - random (quasi-Gaussian) amplitudes in sinusoidal errors;
    - memory cleanup during and after each routine execution;
    - improved computation speed by exchanging image cropping and resampling;
    - custom path selectable by the user;
    - added P-H misalignment around the z-axis;
    - constraint on the minimum pixel size;
    - maximum CCD array size limited to 4000 × 4000;
    - azimuthal edge effects;
    - 2D polynomial errors;
    - aberrations due to the finite distance and off-axis position of the source;
    - capability to load and use a set of measured plate figure errors from ASCII file(s);
    - inclusion/exclusion of the OPD terms in the computation of the diffraction figure;
    - wavefront leveling to keep the focal spot centered in the field;
    - automatic best-focus computation;
    - *.fits output formatted to match FITS standards.
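    Here is that sketch: a minimal 1D illustration of the kind of physical-optics PSF computation SWORDS performs in 2D, in which the figure error enters the complex pupil as a phase term and the far-field PSF is the squared modulus of its Fourier transform. The geometry and all numbers are illustrative; this is not the SWORDS code.

        # Minimal 1D physical-optics PSF sketch; geometry and numbers illustrative.
        import numpy as np

        N = 2048
        lam = 1.24e-9                               # wavelength (m), ~1 keV X-rays
        x = np.linspace(-0.5, 0.5, N)               # normalized 1D pupil coordinate
        aperture = (np.abs(x) < 0.4).astype(float)  # clear pupil

        figure = 1e-10 * np.sin(2 * np.pi * 5 * x)  # 0.1 nm sinusoidal figure error
        phase = 4 * np.pi * figure / lam            # path error ~ 2x the height error
        pupil = aperture * np.exp(1j * phase)       # (grazing-incidence obliquity ignored)

        field = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(pupil)))
        psf = np.abs(field) ** 2
        psf /= psf.sum()   # sinusoidal figure errors show up as diffraction satellites
                           # flanking the core; with figure = 0 the pattern is sinc-like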

    Analytical computation of stray light in nested mirror modules for X-ray telescopes

    Stray light in X-ray telescopes is a well-known issue. Unlike rays focused via a double reflection by the usual grazing-incidence geometries such as the Wolter-I, stray rays coming from off-axis sources are reflected only once, by either the parabolic or the hyperbolic segment. Although not focused, stray light may represent a major source of background and ghost images, especially when observing a field of faint sources in the vicinity of another, more intense source just outside the field of view of the telescope. The stray-light problem is dealt with by mounting a pre-collimator in front of the mirror module, in order to shade the parts of the reflective surfaces that may give rise to singly-reflected rays. Studying the expected stray-light impact, and consequently designing a pre-collimator, is a typical ray-tracing problem, usually time- and computation-consuming, especially considering that rays propagate throughout a densely nested structure. This in turn requires one to pay attention to all the possible obstructions, increasing the complexity of the simulation. In contrast, approaching the stray-light calculation from an analytical viewpoint largely simplifies the problem, and may also ease the task of designing an effective pre-collimator. In this work we present an analytical formalism that can be used to compute the stray light in a nested optical module in a fast and effective way, accounting for obstruction effects.
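    To make the mechanism concrete, the toy 2D (meridional-plane, small-angle) Monte Carlo below approximates the parabolic and hyperbolic segments of a single shell by straight cone profiles at angles alpha and 3*alpha and classifies rays by how many reflections they undergo; all parameters are invented, and this brute-force ray counting is exactly the kind of computation the analytical formalism avoids. In a real densely nested module, neighbouring shells obstruct many of these paths, which is what makes full ray tracing expensive.

        # Toy 2D single-shell ray classifier; all parameters invented, small-angle
        # approximation, not the analytical formalism of the paper.
        import numpy as np

        R, L, alpha = 0.35, 0.30, np.deg2rad(0.25)  # shell radius (m), segment length (m), grazing angle

        def trace(r0, theta):
            """Classify a ray entering the aperture plane z = L at radius r0 with slope theta."""
            # parabola-like cone: r = R + alpha*z for z in [0, L]
            if theta != alpha:  # a ray parallel to the cone never intersects it
                zp = (r0 - theta * L - R) / (alpha - theta)
                if 0.0 <= zp <= L:
                    rp = R + alpha * zp
                    m1 = 2 * alpha - theta            # slope after the first reflection
                    # hyperbola-like cone: r = R + 3*alpha*z for z in [-L, 0]
                    zh = (rp - m1 * zp - R) / (3 * alpha - m1)
                    if -L <= zh <= 0.0:
                        return "focused"              # two reflections: near the focus
                    return "stray (parabola only)"
            # missed the parabola: the ray can still hit the hyperbola directly
            zh = (r0 - theta * L - R) / (3 * alpha - theta)
            if -L <= zh <= 0.0:
                return "stray (hyperbola only)"       # single reflection: ghost image
            return "unreflected"

        rng = np.random.default_rng(0)
        for theta_deg in (0.0, 0.5, 1.0, 2.0):        # off-axis angles of the source
            theta = np.deg2rad(theta_deg)
            r0 = rng.uniform(R - 3 * alpha * L, R + alpha * L, 20_000)
            labels, counts = np.unique([trace(r, theta) for r in r0], return_counts=True)
            print(theta_deg, "deg:", dict(zip(labels, counts.tolist())))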

    X-ray beam-shaping via deformable mirrors: surface profile and point spread function computation for Gaussian beams using physical optics

    X-ray mirrors with high focusing performance are commonly used in different sectors of science, such as X-ray astronomy, medical imaging, and synchrotron/free-electron-laser beamlines. While deformations of the mirror profile may degrade the sharpness of the focus, a deliberate deformation of the mirror, applied via piezo actuators, can endow the focus with a desired size and distribution. The resulting profile can be characterized with suitable metrology tools and correlated with the expected optical quality via a wavefront-propagation code or, sometimes, predicted using geometric optics. In the latter case, and for the special class of profile deformations with monotonically increasing derivative, i.e. concave upwards, the point spread function (PSF) can even be predicted analytically. Moreover, under these assumptions, the relation can also be reversed: from the desired PSF, the required profile deformation can be computed analytically, avoiding the use of trial-and-error search codes. However, the computation has so far been limited to geometric optics, which entailed some limitations: for example, mirror diffraction effects and the size of the coherent X-ray source were not considered. In this paper, the beam-shaping formalism is reviewed in the framework of physical optics, in the limit of small light wavelengths and for Gaussian intensity wavefronts. Some examples of shaped profiles are also shown, aiming to turn a Gaussian intensity distribution into a top-hat one, and the shaping performance is checked by computing the at-wavelength PSF by means of the WISE code.
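    As a minimal physical-optics illustration (not the WISE code), the sketch below propagates a Gaussian-amplitude wavefront reflected by a mirror carrying a hypothetical deliberate deformation, obtaining the focal-plane intensity as the squared modulus of the Fourier-transformed pupil; all numbers are invented.

        # Minimal sketch of a shaped-mirror PSF for a Gaussian beam; the
        # deformation and all numbers are invented, not a designed top-hat profile.
        import numpy as np

        N = 4096
        lam = 1e-10                              # wavelength (m), illustrative
        x = np.linspace(-5e-3, 5e-3, N)          # pupil coordinate (m)
        amp = np.exp(-(x / 1.5e-3) ** 2)         # Gaussian amplitude envelope of the beam

        h = 1e-10 * (x / x.max()) ** 3           # hypothetical deliberate cubic deformation (m)
        phase = 4 * np.pi * h / lam              # path error ~ 2x the height error
        pupil = amp * np.exp(1j * phase)

        field = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(pupil)))
        psf = np.abs(field) ** 2
        psf /= psf.max()   # rerun with h = 0 to compare the shaped spot with the undeformed focus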